MIT experts recommend policies for safe, effective use of AI

By Sara Brown

Whether it’s President Joe Biden’s executive order on artificial intelligence in the U.S. or the Artificial Intelligence Act in the European Union, regulating AI is top of mind for many governments — and businesses need to pay attention. 

In four new policy briefs, MIT experts look broadly at how to safely deploy artificial intelligence and, more specifically, at how to regulate large language models, label AI-generated content, and pursue “pro-worker AI.”

The joint effort by the MIT Schwarzman College of Computing and the MIT Washington Office aims to “help shape a technically informed discussion of how to govern AI in a way that will make it safe while enabling AI to thrive.” Here is an overview of the four documents.

A framework for AI governance in the U.S.

In proposing their framework, the MIT experts identify two objectives: maintaining U.S. AI leadership, which is “vital to economic advancement and national security,” and achieving the beneficial deployment of AI. Beneficial AI requires that security, individual privacy and autonomy, safety, shared prosperity, and democratic and civic values be prioritized.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing; Asu Ozdaglar, deputy dean at MIT Schwarzman; and David Goldston, director of the MIT Washington Office, provide an overview of AI governance in the U.S., written in consultation with an ad hoc committee on AI regulation. Among other ideas, they suggest that:

  • Regulations to shape AI need to be developed in tandem with the technology.
  • The first step in AI governance should be to ensure that current regulations apply to AI to the greatest extent possible. If a human activity without the use of AI is regulated, such as in health care and the law, then that activity combined with the use of AI should be similarly regulated. This will ensure that many higher-risk applications of AI are covered, since those are areas for which laws and regulations have already been developed.
  • Providers of AI systems should be required to identify the intended purposes of a system before it is deployed.

Regulating large language models

Large language models, which serve as the foundation for general-purpose AI systems such as GPT-4, are a transformative technology. But because of their broad applicability, evolving capabilities, unpredictable behavior, and widespread availability, they present challenges that need to be addressed through regulation, according to MIT professors Yoon Kim, Jacob Andreas, and Dylan Hadfield-Menell.

Regulations should take into consideration whether the model is general purpose or task-specific and how widely available it is, the authors write. 

Innovations that could boost large language model safety include verifiable attribution, hard-to-remove watermarks, better guardrails, and auditability.
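
To make one of those ideas concrete, here is a toy sketch of statistical “green list” watermarking in the style of Kirchenbauer et al. (2023). The brief does not prescribe this particular scheme; the vocabulary, secret key, and uniform toy generator below are illustrative assumptions, not anything from the policy document.

```python
# Toy sketch of a "hard-to-remove watermark" for LLM output, modeled on
# green-list watermarking (Kirchenbauer et al., 2023). The vocabulary,
# secret key, and uniform toy "model" are illustrative assumptions.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]   # stand-in for a model's vocabulary
SECRET_KEY = "demo-watermark-key"          # hypothetical detector-shared secret

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudorandomly select a 'green' half of the vocabulary, seeded by the
    secret key and the previous token, so a detector can recompute it."""
    seed = int(hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens: int, bias: float = 0.9, seed: int = 0) -> list:
    """Toy generator: sample tokens uniformly, but prefer green tokens with
    probability `bias`, leaving a statistical fingerprint in the text."""
    rng = random.Random(seed)
    out = ["tok0"]                          # fixed start token
    for _ in range(n_tokens):
        greens = sorted(green_list(out[-1]))
        out.append(rng.choice(greens) if rng.random() < bias else rng.choice(VOCAB))
    return out

def green_fraction(tokens: list) -> float:
    """Detector: recompute each green list and count hits. Unmarked text
    scores ~0.5; watermarked text scores far higher."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

text = generate(300)
print(f"green fraction: {green_fraction(text):.2f}")  # ~0.95 for watermarked text
```

A purely statistical mark like this can be weakened by heavy paraphrasing, which is presumably why watermarking appears in the brief alongside attribution, guardrails, and auditability rather than as a lone mechanism.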

Pursuing pro-worker AI

Digital technologies have significantly increased inequality over the past 40 years, according to MIT economists Daron Acemoglu, David Autor, and Simon Johnson. Generative AI will also affect inequality, the authors write, but the nature of that effect depends on how the technology is developed and applied, and on whether AI complements human workers. Some AI applications hold more promise for workers than others, they note, such as those that personalize teaching tools, improve access to health care while lowering costs, or train modern craft workers such as teachers, nurses, and electricians.

Labeling AI-generated content

Labels are often proposed as a strategy to combat the harms of generative AI; MIT experts have developed a framework to help platforms, practitioners, and policymakers weigh different factors regarding labeling AI-generated content.

Labels have two goals, the authors note: communicating the process through which content was created, and lowering the likelihood that content will mislead or deceive.
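
As a discussion aid only (the brief specifies no schema, and every field name below is an assumption rather than part of any standard such as C2PA), a minimal record type shows why those two goals are separate axes:

```python
# Minimal sketch separating the two labeling goals the authors describe.
# Field names are illustrative assumptions, not a schema from the brief.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentLabel:
    content_id: str
    process: str                   # goal 1: how it was made, e.g. "ai_generated"
    tool_disclosed: Optional[str]  # generator/editor, if the platform knows it
    misleading_risk: str           # goal 2: e.g. "low", "contested", "high"

# Process disclosure alone says nothing about truthfulness: AI-generated
# concept art can be harmless, and fully human-made content can deceive.
art = ContentLabel("img-001", "ai_generated", "hypothetical-image-model", "low")
hoax = ContentLabel("img-002", "human_created", None, "high")
print(art, hoax, sep="\n")
```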

Other important issues include deciding what types of content to label and how to identify it, considering the inferences people will draw about labeled and unlabeled content, and evaluating how effective labels are across different media formats and countries.

The policy brief was written by MIT Sloan professor David Rand, MIT professor Adam Berinsky, research assistant Ziv Epstein, and postdoctoral associate Chloe Wittenberg.

Read next: Generative AI research from MIT Sloan 
